Learning in Linear Feature-Discovery Networks

Author

  • Todd K. Leen
Abstract

We describe the dynamics of learning in unsupervised linear feature-discovery networks that have recurrent lateral connections. Bifurcation theory provides a description of the location of multiple equilibria and limit cycles in the weight-space dynamics.
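
The abstract does not reproduce the network equations, so the following is a minimal, illustrative sketch of one standard network in this class: a Hebbian/anti-Hebbian linear network in the style of Rubner and Tavan, with feedforward weights trained by Oja's rule and recurrent lateral connections trained anti-Hebbian to decorrelate the outputs. The architecture, learning rates, and data below are illustrative assumptions, not the specific model analyzed in the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    # Correlated 3-D Gaussian inputs (zero mean).
    C_true = np.array([[3.0, 1.0, 0.5],
                       [1.0, 2.0, 0.3],
                       [0.5, 0.3, 1.0]])
    L = np.linalg.cholesky(C_true)
    X = rng.standard_normal((20000, 3)) @ L.T

    n_in, n_out = 3, 2
    W = 0.1 * rng.standard_normal((n_out, n_in))   # feedforward (Hebbian) weights
    V = np.zeros((n_out, n_out))                   # recurrent lateral (anti-Hebbian) weights
    eta, mu = 1e-3, 1e-3

    for x in X:
        # Hierarchical outputs: y_i = w_i . x + sum_{j<i} v_ij y_j
        y = np.zeros(n_out)
        for i in range(n_out):
            y[i] = W[i] @ x + V[i, :i] @ y[:i]
        # Oja-style Hebbian update keeps each feedforward weight vector bounded.
        W += eta * (np.outer(y, x) - (y ** 2)[:, None] * W)
        # Anti-Hebbian lateral update decorrelates the outputs.
        for i in range(n_out):
            V[i, :i] -= mu * y[i] * y[:i]

    # Compare learned directions with the leading eigenvectors of the input covariance.
    eigvals, eigvecs = np.linalg.eigh(np.cov(X, rowvar=False))
    pcs = eigvecs[:, ::-1][:, :n_out].T
    cos = np.abs(np.sum(W * pcs, axis=1)) / (np.linalg.norm(W, axis=1) * np.linalg.norm(pcs, axis=1))
    print("cosine similarity with leading principal components:", np.round(cos, 3))
    print("residual lateral weights:", np.round(V[np.tril_indices(n_out, -1)], 3))

With these choices the feedforward rows drift toward the leading eigenvectors of the input covariance and the lateral weights decay toward zero, i.e. the simulated network settles near the principal-component equilibrium whose stability the bifurcation analysis addresses.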


Similar articles

Novel Radial Basis Function Neural Networks based on Probabilistic Evolutionary and Gaussian Mixture Model for Satellites Optimum Selection

In this study, two novel learning algorithms are applied to a Radial Basis Function Neural Network (RBFNN) to approximate highly nonlinear functions. Probabilistic Evolutionary (PE) and Gaussian Mixture Model (GMM) techniques are proposed to significantly reduce the error functions. The main idea concerns the various strategies for optimizing the procedure of Gradient ...

Full text
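
The abstract above is truncated; as a rough, hypothetical illustration of the Gaussian-mixture part of the idea, the sketch below places RBF centers and widths using a GMM fitted to the inputs and then solves for the output weights by regularized least squares. The Probabilistic Evolutionary optimizer and the satellite-selection application are not reproduced, and every name and parameter here is an assumption for the demo, not taken from the cited paper.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(1)

    # Toy highly nonlinear 1-D regression target (stand-in for the paper's benchmarks).
    X = rng.uniform(-3, 3, size=(400, 1))
    y = np.sin(2 * X[:, 0]) + 0.3 * X[:, 0] ** 2 + 0.05 * rng.standard_normal(400)

    # Use a Gaussian mixture fitted to the inputs to place RBF centers and widths.
    gmm = GaussianMixture(n_components=12, random_state=0).fit(X)
    centers = gmm.means_                          # (12, 1) component means as centers
    widths = np.sqrt(gmm.covariances_[:, 0, 0])   # per-component standard deviation

    def rbf_design(X):
        # Gaussian basis functions evaluated at every input point.
        d2 = (X - centers.T) ** 2                 # squared distance to each center
        return np.exp(-d2 / (2 * widths ** 2))

    # Output-layer weights by regularized least squares (in place of gradient training).
    Phi = rbf_design(X)
    w = np.linalg.solve(Phi.T @ Phi + 1e-6 * np.eye(Phi.shape[1]), Phi.T @ y)

    X_test = np.linspace(-3, 3, 200).reshape(-1, 1)
    y_true = np.sin(2 * X_test[:, 0]) + 0.3 * X_test[:, 0] ** 2
    print("test RMSE:", np.sqrt(np.mean((rbf_design(X_test) @ w - y_true) ** 2)))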

Introducing a method for extracting features from facial images based on applying transformations to features obtained from convolutional neural networks

In pattern recognition, features denote measurable characteristics of an observed phenomenon, and feature extraction is the procedure of measuring these characteristics. A set of features can be expressed as a feature vector, which is used as the input data of a system. An efficient feature extraction method can improve the performance of a machine learning system such as face recognit...

Full text
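
As a loose sketch of the pipeline this abstract describes (a convolutional network used as a feature extractor, followed by a transformation of those features), the snippet below runs a pretrained torchvision ResNet-18 on stand-in images and applies PCA to the resulting feature vectors. It assumes a recent torchvision; the choice of backbone, of PCA as the transformation, and the random "images" are all assumptions for illustration, not the paper's method or data.

    import torch
    from torchvision.models import resnet18, ResNet18_Weights
    from sklearn.decomposition import PCA

    # Pretrained CNN used as a fixed feature extractor (its classifier head is dropped).
    model = resnet18(weights=ResNet18_Weights.DEFAULT)
    model.fc = torch.nn.Identity()      # keep the 512-d penultimate features
    model.eval()

    # Stand-in batch of "face" images; a real pipeline would load and preprocess photos.
    images = torch.rand(8, 3, 224, 224)
    with torch.no_grad():
        features = model(images).numpy()          # (8, 512) CNN features

    # Transformation applied to the CNN features; PCA is used here as one concrete choice.
    reduced = PCA(n_components=4).fit_transform(features)
    print(features.shape, "->", reduced.shape)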

Learning Markov Blankets for Continuous or Discrete Networks via Feature Selection

Markov blanket discovery algorithms are important for learning a Bayesian network structure. We present an argument that tree ensemble masking measures can provide an approximate Markov blanket. An ensemble feature selection method is then used to learn Markov blankets for either discrete or continuous networks (without linear or Gaussian assumptions). We compare our algorithm in the causal stru...

Full text
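
As a rough, hypothetical sketch of the ensemble-based idea (not the paper's specific masking measure), the snippet below scores candidate variables with tree-ensemble feature importances and keeps the highest-scoring ones as an approximate Markov blanket of a target variable in a small synthetic continuous network. The data, threshold, and estimator are arbitrary choices for the demo.

    import numpy as np
    from sklearn.ensemble import ExtraTreesRegressor

    rng = np.random.default_rng(2)
    n = 2000

    # Synthetic continuous network: the true Markov blanket of T is {X0, X1, X3}.
    X0, X1 = rng.standard_normal(n), rng.standard_normal(n)
    T = X0 + 0.8 * X1 + 0.1 * rng.standard_normal(n)       # parents X0, X1
    X3 = 0.7 * T + rng.standard_normal(n)                   # child X3
    X2 = rng.standard_normal(n)                             # irrelevant
    X4 = 0.7 * X2 + 0.1 * rng.standard_normal(n)            # irrelevant
    features = np.column_stack([X0, X1, X2, X3, X4])
    names = ["X0", "X1", "X2", "X3", "X4"]

    # Tree-ensemble importances as a rough stand-in for the ensemble masking measure.
    forest = ExtraTreesRegressor(n_estimators=300, random_state=0).fit(features, T)
    ranked = sorted(zip(names, forest.feature_importances_), key=lambda p: -p[1])
    print(ranked)

    # Keep features above a simple importance threshold as the approximate Markov blanket.
    blanket = [name for name, imp in ranked if imp > 1.0 / len(names)]
    print("approximate Markov blanket of T:", blanket)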

On the convergence speed of artificial neural networks in the solving of linear systems

Artificial neural networks have advantages such as learning, adaptation, fault tolerance, parallelism, and generalization. This paper is a scrutiny of the application of diverse learning methods to the speed of convergence of neural networks. To this end, we first introduce a perceptron method based on artificial neural networks which has been applied for solving a non-singula...

Full text
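
As a minimal sketch of the general idea, iteratively solving a nonsingular linear system with a simple gradient-trained linear "network", the snippet below minimizes the squared residual of Ax = b by gradient descent. The particular perceptron formulation and the convergence comparisons of the cited paper are not reproduced; the step size and the test system are arbitrary choices for the demo.

    import numpy as np

    rng = np.random.default_rng(3)

    # Nonsingular test system Ax = b (diagonally dominant, so plain gradient descent converges).
    n = 5
    A = rng.standard_normal((n, n)) + n * np.eye(n)
    x_true = rng.standard_normal(n)
    b = A @ x_true

    # Treat x as the weights of a single linear layer and minimize the squared residual
    # E(x) = 0.5 * ||Ax - b||^2 by gradient descent, one simple "neural" iteration.
    x = np.zeros(n)
    lr = 1.0 / np.linalg.norm(A, 2) ** 2      # step size below 2 / lambda_max(A^T A)
    for k in range(2000):
        residual = A @ x - b
        x -= lr * A.T @ residual
        if np.linalg.norm(residual) < 1e-10:
            break

    print("iterations:", k + 1, " error:", np.linalg.norm(x - x_true))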

Dynamics of Learning in Recurrent Feature-Discovery Networks

The self-organization of recurrent feature-discovery networks is studied from the perspective of dynamical systems. Bifurcation theory reveals parameter regimes in which multiple equilibria or limit cycles coexist with the equilibrium at which the networks perform principal component analysis.

Full text
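
As a worked illustration of the kind of averaged weight-space flow whose equilibria are meant here (a standard single-unit example, not an equation taken from either paper), Oja's rule for a linear unit $y = \mathbf{w}^{\top}\mathbf{x}$ with input covariance $C$ has the averaged dynamics

    \frac{d\mathbf{w}}{dt} = C\,\mathbf{w} - \bigl(\mathbf{w}^{\top} C\,\mathbf{w}\bigr)\,\mathbf{w},

whose equilibria are the unit-norm eigenvectors of $C$, with only the principal eigenvector asymptotically stable; that is the principal-component equilibrium referred to above. Recurrent lateral connections add further parameters to flows of this type, and the variation of such parameters is what the bifurcation analyses in these abstracts track.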


Journal:

Volume   Issue

Pages  -

Publication date: 1991